Purpose: Tracking the 3D motion of the surgical tool and the patient anatomy is a fundamental requirement for computer-assisted skull-base surgery. The estimated motion can be used both for intra-operative guidance and for downstream skill analysis. Recovering such motion solely from surgical videos is desirable, as it is compliant with current clinical workflows and instrumentation. Methods: We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks the rigid 3D motion of the patient skull and the surgical drill from stereo microscopic videos. TAToo estimates motion via an iterative optimization process in an end-to-end differentiable form. For robust tracking performance, TAToo adopts a probabilistic formulation and enforces geometric constraints at the object level. Results: We validate TAToo on both simulation data, where ground truth motion is available, and anthropomorphic phantom data, where optical tracking provides a strong baseline. We report sub-millimeter and millimeter inter-frame tracking accuracy for the skull and drill, respectively, with rotation errors below 1°. We further illustrate how TAToo may be used in a surgical navigation setting. Conclusion: We present TAToo, which simultaneously tracks the surgical tool and the patient anatomy in skull-base surgery. TAToo directly predicts the motion from surgical videos, without the need for any markers. Our results show that the performance of TAToo compares favorably to competing approaches. Future work will include fine-tuning of our depth network to reach the 1 mm clinical accuracy goal desired for surgical applications in the skull base.
translated by Google Translate
As the number of distributed services (or microservices) of cloud-native applications grows, resource management becomes a challenging task. These applications tend to be user-facing and latency-sensitive, and our goal is to continuously minimize the amount of CPU resources allocated while still satisfying the application latency SLO. Although previous efforts have proposed simple heuristics and sophisticated ML-based techniques, we believe that a practical resource manager should accurately scale CPU resources for diverse applications, with minimal human effort and operational overhead. To this end, we ask: can we systematically break resource management down into subproblems solvable by practical policies? Based on the notion of a CPU-throttle-based performance target, we decouple the mechanisms of SLO feedback and resource control, and implement a two-level framework -- Autothrottle. It combines a lightweight learned controller at the global level and agile per-microservice controllers at the local level. We evaluate Autothrottle on three microservice applications, with both short-term and 21-day production workload traces. Empirical results show Autothrottle's superior CPU core savings of up to 26.21% over the best-performing baselines across applications, while maintaining the latency SLO.
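To make the CPU-throttle-based control idea concrete, here is a minimal sketch of a per-microservice local controller. The update rule, function names, and parameters are illustrative assumptions, not Autothrottle's actual implementation: the controller grows a service's CPU allocation when the observed throttle ratio exceeds the target handed down by the global controller, and reclaims CPU when there is slack.

```python
def adjust_cpu(limit, throttle_ratio, target, step=0.1, floor=0.1):
    """Additively grow the CPU limit when throttling exceeds the target;
    multiplicatively shrink it when there is slack (AIMD-style).
    This rule is a sketch, not the paper's controller."""
    if throttle_ratio > target:      # under-provisioned: add CPU
        return limit + step
    return max(floor, limit * 0.95)  # over-provisioned: reclaim CPU

limit = 2.0  # allocated CPU cores for one microservice
for observed in [0.4, 0.35, 0.1, 0.05]:  # observed throttle ratios per window
    limit = adjust_cpu(limit, observed, target=0.2)
```

In a real system the throttle ratio would come from cgroup accounting and the target from the SLO-feedback level, which is precisely the decoupling the abstract describes.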
Target-specific stance detection on social media, which aims at classifying a textual data instance such as a post or a comment into a stance class of a target issue, has become an important emerging opinion-mining paradigm. An example application would be to overcome vaccine hesitancy in combating the coronavirus pandemic. However, existing stance detection strategies rely merely on individual instances, which cannot always capture the stance expressed toward a given target. In response, we address a new task called conversational stance detection, which is to infer the stance towards a given target (e.g., COVID-19 vaccination) when given a data instance and its corresponding conversation thread. To tackle the task, we first propose a benchmark conversational stance detection (CSD) dataset with annotations of stances and the structures of conversation threads among the instances, based on six major social media platforms in Hong Kong. To infer the desired stances from both data instances and conversation threads, we propose a model called Branch-BERT that incorporates contextual information in conversation threads. Extensive experiments on our CSD dataset show that our proposed model outperforms all the baseline models that do not make use of contextual information. Specifically, it improves the F1 score by 10.3% compared with the state-of-the-art method in the SemEval-2016 Task 6 competition. This shows the potential of incorporating rich contextual information for detecting target-specific stances on social media platforms and implies a more practical way to construct future stance detection tasks.
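The core idea of feeding conversation-thread context into a classifier can be sketched as follows. This is an illustrative assumption about the input-construction step, not the paper's Branch-BERT architecture: the target post is flattened together with the chain of its ancestor posts so that a downstream text classifier sees the conversational context.

```python
def branch_context(thread, post_id):
    """thread maps post_id -> (parent_id or None, text). Returns the texts
    from the thread root down to the target post, separator-joined, so a
    text classifier can consume the whole conversation branch as one input."""
    chain = []
    node = post_id
    while node is not None:
        parent, text = thread[node]
        chain.append(text)
        node = parent
    return " [SEP] ".join(reversed(chain))

# Hypothetical three-post thread: 3 replies to 2, which replies to root 1.
thread = {
    1: (None, "Vaccines roll out next week."),
    2: (1, "Will they be free?"),
    3: (2, "Yes, for all residents."),
}
ctx = branch_context(thread, 3)
```

The resulting string would then be tokenized and classified into a stance label for the target issue; the `[SEP]` delimiter is a common BERT-style convention assumed here.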
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
This paper proposes a novel metaheuristic algorithm, the Egret Swarm Optimization Algorithm (ESOA), inspired by the hunting behavior of two egret species (the great egret and the snowy egret). ESOA consists of three main components: a sit-and-wait strategy, an aggressive strategy, and a discriminant condition. The performance of ESOA on 36 benchmark functions and 2 engineering problems is compared with Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Differential Evolution (DE), Grey Wolf Optimizer (GWO), and Harris Hawks Optimization (HHO). The results demonstrate the superior effectiveness and robustness of ESOA. The source code used in this work can be retrieved from https://github.com/knightsll/egret_swarm_optimization_algorithm and https://ww2.mathworks.cn/matlabcentral/fileexchange/115595-Egret-swarm-optimization-algorithm-esoa.
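For readers unfamiliar with this class of method, here is a bare-bones skeleton of the evaluate-perturb-select loop that population-based metaheuristics like ESOA share, run on the sphere benchmark function. ESOA's actual sit-and-wait, aggressive, and discriminant components are considerably more involved; nothing below is specific to ESOA.

```python
import random

def sphere(x):
    """Classic benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def optimize(f, dim=2, pop=20, iters=200, seed=0):
    """Toy stochastic search: repeatedly perturb the best-known solution
    and keep improvements. Stands in for the general structure only."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    for _ in range(iters):
        for _ in range(pop):
            cand = [b + rng.gauss(0, 0.5) for b in best]  # perturb best
            if f(cand) < f(best):                         # greedy select
                best = cand
    return best

sol = optimize(sphere)
```

A benchmark study like the one in the abstract would run each algorithm under a fixed evaluation budget on all 36 functions and compare the resulting objective values.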
A critical aspect of the manufacturing process is the visual quality inspection of manufactured components for defects and flaws. Human-only visual inspection can be very time-consuming and laborious, and is a significant bottleneck, especially for high-throughput manufacturing scenarios. Given the significant advances in the field of deep learning, automated visual quality inspection can lead to highly efficient and reliable detection of defects and flaws during the manufacturing process. However, deep-learning-driven visual inspection methods often require significant computational resources, thus limiting throughput and acting as a bottleneck to widespread adoption in smart factories. In this study, we investigate the utilization of a machine-driven design exploration approach to create TinyDefectNet, a highly compact deep convolutional network architecture tailored for high-throughput manufacturing visual quality inspection. TinyDefectNet comprises just ~427K parameters and has a computational complexity of ~97M operations, yet achieves the detection accuracy of state-of-the-art architectures for the task of surface defect detection on the NEU defect benchmark dataset. As such, TinyDefectNet achieves the same detection performance at 52× lower architectural complexity and 11× lower computational complexity. Furthermore, TinyDefectNet was deployed on an AMD EPYC 7R32 and, using the AMD ZenDNN accelerator library, achieved 7.6× faster throughput. Finally, explainability-driven performance validation was conducted to ensure that TinyDefectNet exhibits correct decision-making behavior, improving the trust of operators and inspectors in its use.
Temporally consistent depth estimation is crucial for real-time applications such as augmented reality. While stereo depth estimation has received substantial attention, leading to per-frame improvements, relatively little work has focused on temporal consistency across frames. Indeed, based on our analysis, current stereo depth estimation techniques still suffer from poor temporal consistency. Stabilizing depth in dynamic scenes is challenging due to concurrent object and camera motion. In an online setting, this process is further aggravated because only past frames are available. In this paper, we present a technique for producing temporally consistent depth estimates in dynamic scenes in an online setting. Our network augments a current per-frame stereo network with novel motion and fusion networks. The motion network accounts for object and camera motion by predicting a per-pixel SE3 transformation. The fusion network improves the consistency of predictions by aggregating the current and previous predictions with regressed weights. We conduct extensive experiments across diverse datasets (synthetic, outdoor, indoor, and medical). In both zero-shot generalization and domain fine-tuning, we demonstrate that our proposed approach outperforms competing approaches in terms of temporal stability and per-frame accuracy, both quantitatively and qualitatively. Our code will be made available online.
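The fusion step can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's fusion network: in the real method the per-pixel weights are regressed by a network and the previous prediction is motion-compensated via the predicted SE3 transforms; here both are simply given.

```python
def fuse_depth(depth_cur, depth_prev, weight):
    """Per-pixel convex combination: w * current + (1 - w) * previous.
    depth_cur, depth_prev, weight are equal-shape 2D lists (toy images)."""
    return [
        [w * c + (1.0 - w) * p for c, p, w in zip(rc, rp, rw)]
        for rc, rp, rw in zip(depth_cur, depth_prev, weight)
    ]

cur  = [[1.0, 2.0], [3.0, 4.0]]      # current-frame depth prediction
prev = [[1.2, 1.8], [3.0, 4.4]]      # previous prediction (assumed warped)
w    = [[0.75, 0.75], [0.75, 0.75]]  # fusion weights (regressed in the paper)
fused = fuse_depth(cur, prev, w)
```

Intuitively, a high weight trusts the fresh measurement, while a low weight smooths toward the temporal history, which is what yields the reported temporal stability.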
Surgical simulators not only allow planning and training of complex procedures, but also offer the ability to generate structured data for algorithm development, which can be applied to image-guided computer-assisted interventions. While training platforms for surgeons and data generation engines have each been developed, to our knowledge these two features have not been offered together. We present our developments of a cost-effective and synergistic framework, named Asynchronous Multibody Framework Plus (AMBF+), which generates data for downstream algorithm development simultaneously with users practicing their surgical skills. AMBF+ offers stereoscopic display on a virtual reality (VR) device and haptic feedback for immersive surgical simulation. It can also generate diverse data such as object poses and segmentation maps. AMBF+ is designed with a flexible plugin setup that allows for simulating different surgical procedures. We show one use case of AMBF+ as a virtual drilling simulator for lateral skull-base surgery, where users can actively modify the patient anatomy using a virtual surgical drill. We further demonstrate how the generated data can be used for validating and training downstream computer vision algorithms.
Recent studies have shown that neural combinatorial optimization (NCO) has advantages over traditional algorithms in many combinatorial optimization problems such as routing, but it is less efficient for more complicated optimization tasks such as packing, which involve mutually conditioned action spaces. In this paper, we propose a Recurrent Conditional Query Learning (RCQL) method to solve both 2D and 3D packing problems. We first embed states with a recurrent encoder, and then adopt attention with conditional queries from previous actions. The conditional query mechanism fills the information gap between learning steps, shaping the problem as a Markov decision process. Benefiting from the recurrence, a single RCQL model is able to handle packing problems of different sizes. Experimental results show that RCQL can effectively learn strong heuristics for offline and online strip packing problems (SPPs), outperforming a wide range of baselines in space utilization. Compared with state-of-the-art methods, RCQL reduces the average bin gap ratio by 1.83% and 3.84% in offline 2D 40-box cases. Meanwhile, our approach also achieves 5.64% higher space utilization than the state of the art for strip packing with 1000 items.
The need for efficient computational screening of molecular candidates that possess desired properties frequently arises in various scientific and engineering problems, including drug discovery and materials design. However, the large size of the search space containing the candidates and the substantial computational cost of high-fidelity property prediction models make screening practically challenging. In this work, we propose a general framework for constructing and optimizing a high-throughput virtual screening (HTVS) pipeline that consists of multi-fidelity models. The central idea is to optimally allocate the computational resources to models with varying costs and accuracy to optimize the return-on-computational-investment (ROCI). Based on both simulated and real data, we demonstrate that the proposed optimal HTVS framework can significantly accelerate screening virtually without any degradation in terms of accuracy. Furthermore, it enables an adaptive operational strategy for HTVS, where one can trade accuracy for efficiency.
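A multi-fidelity screening pipeline of the kind described above can be sketched as a simple cascade, where cheap, less accurate models filter the candidate pool before expensive, accurate ones see it. The model functions, costs, and thresholds below are hypothetical toys; the paper's contribution is precisely the optimal allocation of compute across such stages, which this sketch does not perform.

```python
def screen(candidates, stages):
    """stages: list of (score_fn, threshold) ordered by increasing cost.
    Keep only candidates whose score meets the threshold at every stage,
    so costly models run on a progressively smaller pool."""
    pool = list(candidates)
    for score_fn, threshold in stages:
        pool = [c for c in pool if score_fn(c) >= threshold]
    return pool

# Toy example: two 'fidelities' scoring the same integer candidates.
cheap  = lambda x: x         # fast surrogate model
costly = lambda x: x * 1.01  # slower, more accurate model
hits = screen(range(10), [(cheap, 5), (costly, 7)])
```

The efficiency of such a cascade hinges on where the thresholds sit: loose early thresholds waste expensive evaluations, tight ones risk discarding true hits, and the ROCI objective in the abstract formalizes exactly that trade-off.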